Controlled Markov chains with non-exponential discounting and distribution-dependent costs

Authors

Abstract

This paper deals with a continuous-time controlled Markov chain with non-exponential discounting and a distribution-dependent cost functional. A definition of closed-loop equilibrium is given, and its existence and uniqueness are established. Despite the time-inconsistency brought by the distribution dependence, the equilibrium is proved to be locally optimal in an appropriate sense. Moreover, the problem is shown to be equivalent to a mean-field game for infinitely many symmetric players with a common cost.


Related articles

Consumption-investment strategies with non-exponential discounting and logarithmic utility

In this paper, we revisit the consumption-investment problem with a general discount function and a logarithmic utility function in a non-Markovian framework. The coefficients in our model, including the interest rate, appreciation rate and volatility of the stock, are assumed to be adapted stochastic processes. Following Yong (2012a,b)’s method, we study an N-person differential game. We adopt...


Controlled Markov chains with safety upper bound

In this paper we introduce and study the notion of safety control of stochastic discrete event systems (DESs), modeled as controlled Markov chains. For non-stochastic DESs, modeled by state machines or automata, safety is specified as a set of forbidden states, or equivalently by a binary valued vector that imposes an upper bound on the set of states permitted to be visited. We generalize this ...


Markov Chains, Coupling, Stationary Distribution

In this lecture, we will introduce Markov chains and show a potential algorithmic use of Markov chains for sampling from complex distributions. For a finite state space Ω, we say a sequence of random variables (Xt) on Ω is a Markov chain if the sequence is Markovian in the following sense: for all t and all x0, . . . , xt, y ∈ Ω, we require Pr(Xt+1 = y | X0 = x0, X1 = x1, . . . , Xt = xt) = Pr(Xt+1 = y | Xt = xt). ...
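The Markov property and the sampling use described above can be sketched in a few lines: each step depends only on the current state, and the long-run fraction of time spent in each state approximates the stationary distribution. The three-state chain and transition matrix below are illustrative assumptions, not taken from the cited lecture.

```python
import random

# Hypothetical 3-state chain on Omega = {0, 1, 2}; this transition
# matrix is an illustrative example, not from the lecture notes.
P = [
    [0.50, 0.50, 0.00],
    [0.25, 0.50, 0.25],
    [0.00, 0.50, 0.50],
]

def step(x):
    """Sample X_{t+1} given X_t = x: the next state depends only on
    the current state (the Markov property)."""
    r = random.random()
    acc = 0.0
    for y, p in enumerate(P[x]):
        acc += p
        if r < acc:
            return y
    return len(P[x]) - 1  # guard against floating-point round-off

def empirical_distribution(x0, t_max, seed=0):
    """Long-run visit frequencies, which approximate the stationary
    distribution of an irreducible aperiodic finite chain."""
    random.seed(seed)
    counts = [0] * len(P)
    x = x0
    for _ in range(t_max):
        x = step(x)
        counts[x] += 1
    return [c / t_max for c in counts]

print(empirical_distribution(0, 100_000))
```

For this particular matrix the exact stationary distribution is (0.25, 0.5, 0.25), which the empirical frequencies approach as t_max grows.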


Denumerable controlled Markov chains with strong average optimality criterion: Bounded & unbounded costs

This paper studies discrete-time nonlinear controlled stochastic systems, modeled by controlled Markov chains (CMC) with denumerable state space and compact action space, and with an infinite planning horizon. Recently, there has been a renewed interest in CMC with a long-run, expected average cost (AC) optimality criterion. A classical approach to study average optimality consists in formulati...


Controlled Markov Chains, Graphs, and Hamiltonicity

This manuscript summarizes a line of research that maps certain classical problems of discrete mathematics — such as the Hamiltonian Cycle and the Traveling Salesman Problems — into convex domains where continuum analysis can be carried out. Arguably, the inherent difficulty of these, now classical, problems stems precisely from the discrete nature of domains in which these problems are posed. ...



Journal

Journal: ESAIM: Control, Optimisation and Calculus of Variations

Year: 2021

ISSN: 1262-3377, 1292-8119

DOI: https://doi.org/10.1051/cocv/2021003